Large Language Models Do Not Simulate Human Psychology
Sarah Schröder, Thekla Morgenroth, Ulrike Kuhl, Valerie Vaquet, Benjamin Paaßen
Large Language Models (LLMs), such as ChatGPT, are increasingly used in research, ranging from simple writing assistance to complex data annotation tasks. Recently, some research has suggested that LLMs may even be able to simulate human psychology and can, hence, replace human participants in psychological studies. We caution against this approach. We provide conceptual arguments against the hypothesis that LLMs simulate human psychology. We then present empirical evidence illustrating our arguments by demonstrating that slight changes to wording that correspond to large changes in meaning lead to notable discrepancies between LLMs' and humans' responses, even for the recent CENTAUR model that was specifically fine-tuned on psychological responses. Additionally, different LLMs show very different responses to novel items, further illustrating their lack of reliability. We conclude that LLMs do not simulate human psychology and recommend that psychological researchers treat LLMs as useful but fundamentally unreliable tools that need to be validated against human responses for every new application.
Researching interdisciplinary methods in computational creativity – interview with Nadia Ady and Faun Rice
Nadia Ady and Faun Rice are working on a research project exploring where artificial intelligence (AI) researchers find inspiration and ideas about human intelligence, and what approaches they use to translate ideas from the disciplines that study human intelligence (e.g. psychology) into machines. We spoke to Nadia and Faun about the project, what they've learnt so far, and how they plan to further develop the work. Faun: We are doing a multidisciplinary project – I'm from the social sciences, while Nadia works on artificial intelligence. We're interviewing other artificial intelligence researchers who work with direct analogs from human psychology and try to translate them for machines in some way. For one, we talk to them about how they find the definitions that they're working with.
Harvard Developed AI Identifies the Shortest Path to Human Happiness
The researchers created a digital model of psychology aimed at improving mental health. The system offers superior personalization and identifies the shortest path toward a cluster of mental stability for any individual. Deep Longevity has published a paper in Aging-US outlining a machine learning approach to human psychology in collaboration with Nancy Etcoff, Ph.D., of Harvard Medical School, an authority on happiness and beauty. The authors created two digital models of human psychology based on data from the Midlife in the United States study. The first model is an ensemble of deep neural networks that predicts respondents' chronological age and their psychological well-being 10 years out, using information from a psychological survey.
Could Artificial Intelligence Do More Harm Than Good To Society?
In an increasingly digitized world, the artificial intelligence (AI) boom is only getting started. But could the risks of artificial intelligence outweigh the potential benefits these technologies might bring to society in the years ahead? In this segment of Backstage Pass, recorded on Dec. 14, Fool contributors Asit Sharma, Rachel Warren, and Demitri Kalogeropoulos discuss. "I think artificial intelligence is going to continue to provide both benefits as well as detriments to society. It's empowering these law enforcement agencies to have a more efficient way of tracking down criminals, keeping people safer. Are these algorithms judging people equally, or are they including certain things that single out certain individuals that may or may not be fair in the long run and may, in fact, result in less justice?"
Machine Learning: It Takes Humans to Train Them
Machine learning is a technology that has become more and more popular over time; anyone connected to the IT industry will know it as a subset of artificial intelligence. Companies from Netflix and Google down to much smaller firms use machine learning algorithms to extract insights from their data. Although the terms artificial intelligence, machine learning, and deep learning are often used interchangeably, they are not the same thing: machine learning is a subset of artificial intelligence, and deep learning is a subset of machine learning. The idea echoes Alan Turing's early vision of learning machines: machine learning is an application of artificial intelligence in which a computer learns from past experience (input data) and makes predictions about the future, with performance that should be at least at human level.
How AI and Deep Learning Help Explain Human Fear - iQ by Intel
Researchers are breaking down the barrier between people and machines by teaching computers to recognize fear. On the 4th floor of the pristine Media Lab Complex at MIT lives a Nightmare Machine. These computers earned that nickname for a reason: they have been learning how to terrify people. A series of algorithms generates disturbing and grotesque images, like movie monsters, dead people, and other things that go bump in the night. "We wanted to playfully explore how artificial intelligence (AI) can become a demon that learns how to scare you," said Pinar Yanardag, one of the creators of the gore-loving computer program.